Combining Geometric Semantic GP with Gradient-Descent Optimization

Authors

Abstract

Geometric semantic genetic programming (GSGP) is a well-known variant of genetic programming (GP) in which the recombination and mutation operators have a clear semantic effect. Both kinds of operators have randomly selected parameters that are not optimized by the search process. In this paper we combine GSGP with a gradient-based optimizer, Adam, in order to leverage the ability of GP to operate structural changes on the individuals together with the ability of gradient-based methods to optimize the parameters of a given structure. Two methods, named HYB-GSGP and HeH-GSGP, are defined and compared on a large set of regression problems, showing that the use of Adam can improve performance on the test set. The idea of merging evolutionary computation and gradient-based optimization is a promising way of combining two techniques with very different, and complementary, strengths.
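At a high level, the combination alternates GP-style structural changes of an individual with gradient steps on its numeric coefficients. The following Python sketch illustrates only that general idea; it is not the paper's actual HYB-GSGP or HeH-GSGP procedure, and the random-tree stand-in, learning rate, and loop sizes are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_tree_semantics(X):
    """Stand-in for the semantics of a random GP tree on the training inputs
    (hypothetical: a real GSGP system would evaluate an actual random tree)."""
    w = rng.normal(size=X.shape[1])
    return np.tanh(X @ w)

def adam_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on the numeric coefficients of an individual."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Toy regression data.
X = rng.normal(size=(64, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Individual = linear combination of base semantics; a GSGP-style mutation
# appends a new random tree, then Adam refines the coefficients.
bases, theta = [np.ones(len(y))], np.array([0.0])
state = (np.zeros_like(theta), np.zeros_like(theta), 0)

for gen in range(20):
    # Structural change: semantic mutation adds a random tree.
    bases.append(random_tree_semantics(X))
    theta = np.append(theta, 0.1)
    state = (np.append(state[0], 0.0), np.append(state[1], 0.0), state[2])
    # Parameter optimization: a few Adam steps on the training MSE.
    B = np.column_stack(bases)
    for _ in range(10):
        residual = B @ theta - y
        grad = 2 * B.T @ residual / len(y)
        theta, state = adam_step(theta, grad, state)

print("train MSE:", np.mean((np.column_stack(bases) @ theta - y) ** 2))
```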

Similar resources

Learning by Combining Memorization and Gradient Descent

We have created a radial basis function network that allocates a new computational unit whenever an unusual pattern is presented to the network. The network learns by allocating new units and adjusting the parameters of existing units. If the network performs poorly on a presented pattern, then a new unit is allocated which memorizes the response to the presented pattern. If the network perform...
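The allocation rule described above can be pictured as a small online loop: if the error on the current pattern is large and the pattern lies far from every existing unit, a new unit is allocated to memorize it; otherwise the existing weights are adjusted by gradient descent. A minimal Python sketch, with thresholds, Gaussian width, and learning rate chosen purely for illustration:

```python
import numpy as np

class ResourceAllocatingNetwork:
    """Minimal sketch of a resource-allocating RBF network (thresholds,
    width, and learning rate are illustrative assumptions)."""

    def __init__(self, err_thresh=0.1, dist_thresh=0.5, width=0.5, lr=0.05):
        self.centers, self.weights = [], []
        self.err_thresh, self.dist_thresh = err_thresh, dist_thresh
        self.width, self.lr = width, lr

    def _phi(self, x):
        return np.array([np.exp(-np.sum((x - c) ** 2) / self.width ** 2)
                         for c in self.centers])

    def predict(self, x):
        return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

    def observe(self, x, y):
        err = y - self.predict(x)
        nearest = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
        if abs(err) > self.err_thresh and nearest > self.dist_thresh:
            # Unusual pattern: allocate a unit that memorizes the response.
            self.centers.append(np.array(x, dtype=float))
            self.weights.append(err)
        elif self.centers:
            # Familiar pattern: adjust existing weights by gradient descent.
            phi = self._phi(x)
            self.weights = list(np.array(self.weights) + self.lr * err * phi)

# Usage: learn a 1-D function from an online stream of patterns.
rng = np.random.default_rng(1)
net = ResourceAllocatingNetwork()
for _ in range(500):
    x = rng.uniform(-3, 3, size=1)
    net.observe(x, float(np.sin(x[0])))
print("units allocated:", len(net.centers))
```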

Multiple-gradient Descent Algorithm for Multiobjective Optimization

The steepest-descent method is a well-known and effective single-objective descent algorithm when the gradient of the objective function is known. Here, we propose a particular generalization of this method to multi-objective optimization by considering the concurrent minimization of n smooth criteria {J_i} (i = 1, ..., n). The novel algorithm is based on the following observation: consider a...
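MGDA-style methods build on the minimum-norm element of the convex hull of the individual gradients: it gives a direction along which no criterion increases to first order, and it vanishes exactly at Pareto-stationary points. A minimal Python sketch for n = 2, where this element has a closed form; the toy criteria and step size are assumptions made only for illustration:

```python
import numpy as np

def mgda_direction(g1, g2):
    """Common descent direction used by MGDA-style methods for two objectives:
    the minimum-norm point alpha*g1 + (1 - alpha)*g2 in the convex hull of the
    two gradients (closed form for n = 2; larger n requires a small QP)."""
    diff = g2 - g1
    denom = diff @ diff
    alpha = 0.5 if denom == 0.0 else float(np.clip((diff @ g2) / denom, 0.0, 1.0))
    return -(alpha * g1 + (1 - alpha) * g2)

# Toy criteria J1(x) = ||x - 1||^2 and J2(x) = ||x + 1||^2 with conflicting minima.
x = np.array([2.0, -1.0])
for _ in range(200):
    g1, g2 = 2 * (x - 1.0), 2 * (x + 1.0)   # gradients of J1 and J2 at x
    x = x + 0.05 * mgda_direction(g1, g2)
print("x after MGDA-style descent:", x)
print("J1 =", np.sum((x - 1.0) ** 2), "J2 =", np.sum((x + 1.0) ** 2))
```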

Simple2Complex: Global Optimization by Gradient Descent

A method named simple2complex for modeling and training deep neural networks is proposed. Simple2complex trains deep neural networks by smoothly adding more and more layers to a shallow network; as the learning procedure goes on, the network effectively grows. Compared with end-to-end learning, simple2complex is less likely to become trapped in a poor local minimum, namely, owning the ability for...
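The growing procedure can be illustrated by a short loop that trains a shallow network, appends one more hidden layer, and continues training. A minimal PyTorch sketch; the layer width, growth schedule, and optimizer settings are assumptions for illustration and not the configuration used in the paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 4)
y = torch.sin(X[:, :1]) + 0.5 * X[:, 1:2]

hidden = 16
layers = [nn.Linear(4, hidden), nn.Tanh()]   # shallow starting network
head = nn.Linear(hidden, 1)

for stage in range(3):                       # grow the network in stages
    model = nn.Sequential(*layers, head)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                     # train the current network
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    print(f"stage {stage}: {len(layers) // 2} hidden layers, loss={loss.item():.4f}")
    # smoothly add one more hidden layer before the output head
    layers += [nn.Linear(hidden, hidden), nn.Tanh()]
```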

Convex Optimization: A Gradient Descent Approach

This algorithm (online gradient descent, OGD) is introduced by [Zin]. Assuming the losses $\{l_t(\cdot)\}_{t=1}^{T}$ are differentiable, start with some arbitrary $x_1 \in X$ and, for $t = 1, \ldots, T$:
• $z_{t+1} \leftarrow x_t - \eta \nabla l_t(x_t)$
• $x_{t+1} \leftarrow \Pi_X(z_{t+1})$, where $\Pi_X(\cdot)$ is the projection onto $X$.
The performance of OGD is described as follows. Theorem: if there exist positive constants $G$ and $D$ such that $\|\nabla l_t(x)\|_2 \le G$ for all $t$ and $x \in X$, and $\|X\|_2 := \max_{x,y \in X} \|x - y\|_2 \le D$...
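Concretely, each round takes one gradient step on the loss just revealed and projects the result back onto the feasible set. A minimal Python sketch with an l2-ball feasible set and quadratic per-round losses, both chosen only for illustration:

```python
import numpy as np

def project_l2_ball(z, radius=1.0):
    """Euclidean projection onto the feasible set X (here an l2 ball)."""
    norm = np.linalg.norm(z)
    return z if norm <= radius else z * (radius / norm)

# Online gradient descent on a stream of convex losses l_t(x) = ||x - a_t||^2.
rng = np.random.default_rng(0)
x = np.zeros(3)                        # arbitrary x_1 in X
eta = 0.1
for t in range(1, 201):
    a_t = rng.normal(size=3) * 0.3     # the environment reveals l_t
    grad = 2 * (x - a_t)               # gradient of l_t at x_t
    z = x - eta * grad                 # z_{t+1} <- x_t - eta * grad
    x = project_l2_ball(z)             # x_{t+1} <- Pi_X(z_{t+1})
print("final iterate:", x)
```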

An overview of gradient descent optimization algorithms

Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different vari...

Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-02056-8_2